1. Parameter Estimation of ARMA(p,q)


  • Yule-Walker for AR parameters
  • Conditional Sum of Squares Estimators
  • Maximum Likelihood Estimators



MLE

% Brockwell p158

Suppose \(e_t \sim N(0,\sigma^2)\). Then a causal ARMA(\(p,q\)) process \(X_t\) must be normal as well, \[ X_t \hspace{3mm} = \hspace{3mm} \sum_{i=0}^\infty \psi_i e_{t-i} \\ \\ X_t \sim N\Big(0, \gamma(0)\Big) \] We know that \(X_{t+1}\) has the same distribution, and that the covariance between \(X_t\) and \(X_{t+1}\) is \(\gamma(1)\).



Multivariate Normal

  • That means \((X_1, \ldots, X_n)'\) is an \(n\)-dimensional multivariate normal vector.

\[ \left[ \begin{array}{ccccc} X_1 \\ X_2 \\ \vdots \\ X_n \\ \end{array} \right] \sim N_n \Big(0, \mathbf \Sigma\Big), \hspace{5mm} \mbox{ where } {\small \mathbf \Sigma = \left[ \begin{array}{c c c c} \gamma(0) &\gamma(1) &\cdots &\gamma(n-1) \\ \gamma(1) &\gamma(0) &\cdots &\gamma(n-2) \\ \vdots &\vdots & & \vdots \\ \gamma(n-1) &\gamma(n-2) &\cdots &\gamma(0) \\ \end{array} \right]} \]

  • The mean of each element is 0, and the covariance matrix is \[ \mathbf \Sigma = \mathbf \Gamma_n \]

  • We know that the joint pdf of a multivariate normal vector with covariance matrix \(\mathbf \Gamma_n\) gives the likelihood \[ L(\phi, \theta, \sigma) = \frac{1}{(2 \pi)^{n/2} (\mbox{det}\mathbf \Gamma_n)^{1/2} } \exp\Big( - \frac{1}{2} \mathbf X'_n \mathbf \Gamma_n^{-1} \mathbf X_n \Big) \]

  • When the MLE is used, the likelihood must be maximized numerically. Therefore, we need a reasonable initial value from a preliminary estimator.

  • It is common to assume that the errors are normal and to use the Gaussian likelihood for the MLE. For ARMA models, the penalty for violating the normality assumption is small.
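The Gaussian likelihood above can be evaluated directly by building the Toeplitz matrix \(\mathbf \Gamma_n\) from the autocovariances. A minimal Python sketch (the AR(1) autocovariance \(\gamma(h) = \sigma^2\phi^{h}/(1-\phi^2)\) and the parameter values are assumptions for illustration):

```python
import numpy as np
from scipy.linalg import toeplitz

def gaussian_loglik(x, gamma):
    """Exact Gaussian log-likelihood of x given autocovariances gamma(0..n-1)."""
    n = len(x)
    Gamma = toeplitz(gamma[:n])           # Gamma_n with (i,j) entry gamma(|i-j|)
    sign, logdet = np.linalg.slogdet(Gamma)
    quad = x @ np.linalg.solve(Gamma, x)  # X' Gamma^{-1} X without explicit inverse
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

# Hypothetical AR(1) example: gamma(h) = sigma^2 * phi^h / (1 - phi^2)
phi, sigma2, n = 0.6, 1.0, 200
rng = np.random.default_rng(0)
e = rng.normal(0, np.sqrt(sigma2), n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]
gamma = sigma2 * phi ** np.arange(n) / (1 - phi ** 2)
print(gaussian_loglik(x, gamma))
```

In practice this \(O(n^3)\) evaluation is avoided via the innovations algorithm, as developed later in this section.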



Large-Sample Property of MLE

% BD p.162

  • For a large sample from an ARMA(\(p,q\)) process, the MLE has the approximate sampling distribution

\[ \hat \beta \approx N\Big(\beta, \frac{V(\beta)}{n}\Big) \] where \(\beta = (\phi_1, \ldots, \phi_p, \theta_1, \ldots, \theta_q, \sigma)^T\)



Example: AR(p)

\[ V(\beta) = \sigma^2 \mathbf \Gamma_p^{-1} \] This is the same as the asymptotic covariance matrix of the Yule-Walker estimators.

\[ \mbox{ AR(1): } \hspace{3mm} V(\phi_1) \hspace{3mm} = \hspace{3mm} (1-\phi_1^2) \\\\ \mbox{ AR(2): } \hspace{3mm} V(\phi_1, \phi_2) \hspace{3mm} = \hspace{3mm} \left[ \begin{array}{ccccc} 1-\phi_2^2 & -\phi_1 (1+\phi_2) \\ -\phi_1 (1+\phi_2) & 1-\phi_2^2 \\ \end{array} \right] \]
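These closed forms can be checked numerically. A minimal sketch (hypothetical parameter values) computes the AR(1) large-sample standard error \(\sqrt{V(\phi_1)/n}\) and verifies the AR(2) formula against \(\sigma^2 \mathbf \Gamma_2^{-1}\) built from the Yule-Walker autocovariances:

```python
import numpy as np

# Large-sample s.e. of the AR(1) MLE: sqrt(V(phi1)/n) = sqrt((1 - phi1^2)/n)
def ar1_se(phi1, n):
    return np.sqrt((1 - phi1 ** 2) / n)

print(ar1_se(0.8, 100))  # ~ 0.06

# AR(2) check (hypothetical phi1, phi2): sigma^2 * Gamma_2^{-1} should match
# [[1-phi2^2, -phi1(1+phi2)], [-phi1(1+phi2), 1-phi2^2]]
phi1, phi2 = 0.5, -0.3
g0 = (1 - phi2) / ((1 + phi2) * ((1 - phi2) ** 2 - phi1 ** 2))  # gamma(0), sigma^2 = 1
g1 = phi1 * g0 / (1 - phi2)                                     # gamma(1)
Gamma2 = np.array([[g0, g1], [g1, g0]])
V_numeric = np.linalg.inv(Gamma2)
V_formula = np.array([[1 - phi2 ** 2, -phi1 * (1 + phi2)],
                      [-phi1 * (1 + phi2), 1 - phi2 ** 2]])
print(np.allclose(V_numeric, V_formula))  # True
```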



Example: MA(q)

\[ \mbox{ MA(1): } \hspace{3mm} V(\theta_1) \hspace{3mm} = \hspace{3mm} (1-\theta_1^2) \\\\ \mbox{ MA(2): } \hspace{3mm} V(\theta_1,\theta_2) \hspace{3mm} = \hspace{3mm} \left[ \begin{array}{cc} 1-\theta_2^2 & \theta_1 (1-\theta_2) \\ \theta_1 (1-\theta_2) & 1-\theta_2^2 \\ \end{array} \right] \]



Example: ARMA(1,1)

\[ V(\phi_1,\theta_1) \hspace{3mm} = \hspace{3mm} \frac{1+ \phi_1 \theta_1}{(\phi_1+\theta_1)^2} \left[ \begin{array}{cc} (1-\phi_1^2)(1+\phi_1 \theta_1) & -(1-\phi_1^2)(1-\theta_1^2) \\ -(1-\phi_1^2)(1-\theta_1^2) & (1-\theta_1^2)(1+\phi_1\theta_1) \\ \end{array} \right] \]



MLE and CSS

  • We know that the joint pdf of a multivariate normal vector with covariance matrix \(\mathbf \Gamma_n\) gives the likelihood \[ L(\phi, \theta, \sigma) \hspace{3mm} = \hspace{3mm} \frac{1}{(2 \pi)^{n/2} (\mbox{det}\mathbf \Gamma_n)^{1/2} } \exp\Big( - \frac{1}{2} \mathbf X'_n \mathbf \Gamma_n^{-1} \mathbf X_n \Big) \]

  • Recall from the innovations algorithm (with the one-step predictors \(\hat X_j\)), \[ \left[ \begin{array}{c} X_1 - \hat X_1 \\ X_2 - \hat X_2 \\ X_3 - \hat X_3 \\ X_4 - \hat X_4 \\ \end{array}\right] \hspace{3mm} = \hspace{3mm} \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ a_{11} & 1 & 0 & 0 \\ a_{21} & a_{22} & 1 & 0 \\ a_{31} & a_{32} & a_{33} & 1 \\ \end{array} \right] \hspace{3mm} \left[ \begin{array}{l} X_1 \\ X_2 \\ X_3 \\ X_4 \\ \end{array} \right] \\ \\ \mathbf X_n - \mathbf {\hat X}_n \hspace{3mm} = \hspace{3mm} \mathbf A_n \mathbf X_n \] or \[ \mathbf X_n \hspace{3mm} = \hspace{3mm} \mathbf A_n^{-1} \Big( \mathbf X_n - \mathbf {\hat X}_n \Big) \hspace{5mm} \mbox{ where } \hspace{5mm} \mathbf A_n^{-1} = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ \theta_{11} & 1 & 0 & 0 \\ \theta_{22} & \theta_{21} & 1 & 0 \\ \theta_{33} & \theta_{32} & \theta_{31} & 1 \\ \end{array} \right]. \]


  • We know that for random vectors \(\mathbf Y\) and \(\mathbf X\) with \(\mathbf Y = \mathbf B \mathbf X\), the covariance matrix satisfies \[ \mathbf \Sigma_{YY} \hspace{3mm} = \hspace{3mm} \mathbf B \mathbf \Sigma_{XX} \mathbf B' \]


  • That means, if we write \(\mathbf X_n = \mathbf A_n^{-1} (\mathbf X_n - \mathbf{\hat X}_n)\) as above, then \[ \mathbf \Gamma_n \hspace{3mm} = \hspace{3mm} \mathbf A_n^{-1} \, E\Big[ (\mathbf X_n - \mathbf{\hat X}_n)(\mathbf X_n - \mathbf{\hat X}_n)'\Big] \, (\mathbf A_n^{-1})'. \] It so happens that the covariance matrix of the innovations is diagonal, \[ E\Big[ (\mathbf X_n - \mathbf{\hat X}_n)(\mathbf X_n - \mathbf{\hat X}_n)'\Big] = \mbox{diag}( \nu_0, \ldots, \nu_{n-1} ) = \mathbf D_n, \] where the \(\nu_i\) come from the innovations algorithm.


  • So we have \[ \begin{align} \mathbf \Gamma_n &= \hspace{3mm} \mathbf A_n^{-1} \mathbf D_n (\mathbf A_n^{-1})', \\ \mathbf \Gamma_n^{-1} &= \hspace{3mm} \Big(\mathbf A_n^{-1} \mathbf D_n (\mathbf A_n^{-1})'\Big)^{-1} \hspace{3mm} = \hspace{3mm} \mathbf A_n' \mathbf D_n^{-1} \mathbf A_n, \\ \\ \mathbf X_n' \mathbf \Gamma_n^{-1} \mathbf X_n &= \hspace{3mm} \mathbf X_n' \Big( \mathbf A_n' \mathbf D_n^{-1} \mathbf A_n \Big) \mathbf X_n \hspace{3mm} = \hspace{3mm} (\mathbf X_n - \mathbf{\hat X}_n)' \mathbf D_n^{-1} (\mathbf X_n - \mathbf{\hat X}_n) \hspace{3mm} = \hspace{3mm} \sum_{j=1}^n \Big(X_j - \hat X_j\Big)^2 / \nu_{j-1}. \end{align} \]

\[ \mbox{det}\,\mathbf \Gamma_n = (\mbox{det}\, \mathbf A_n^{-1}) (\mbox{det}\, \mathbf D_n) (\mbox{det}\, \mathbf A_n^{-1}) = \nu_0 \nu_1 \cdots \nu_{n-1}, \] since \(\mathbf A_n^{-1}\) is unit lower-triangular, so \(\mbox{det}\, \mathbf A_n^{-1} = 1\).
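The factorization \(\mathbf \Gamma_n = \mathbf A_n^{-1} \mathbf D_n (\mathbf A_n^{-1})'\) and the determinant identity can be verified with a direct implementation of the innovations algorithm. A sketch in Python (an MA(1) with \(\theta = 0.5\), \(\sigma^2 = 1\) is an assumption for illustration):

```python
import numpy as np
from scipy.linalg import toeplitz

def innovations(gamma):
    """Innovations algorithm for a stationary series with autocovariances
    gamma(0..n-1). Returns theta (theta[m, j] = theta_{mj}) and MSEs nu."""
    n = len(gamma)
    nu = np.zeros(n)
    theta = np.zeros((n, n))
    nu[0] = gamma[0]
    for m in range(1, n):
        for k in range(m):
            s = sum(theta[k, k - j] * theta[m, m - j] * nu[j] for j in range(k))
            theta[m, m - k] = (gamma[m - k] - s) / nu[k]
        nu[m] = gamma[0] - sum(theta[m, m - j] ** 2 * nu[j] for j in range(m))
    return theta, nu

# MA(1) illustration (sigma^2 = 1, theta = 0.5): gamma(0)=1+theta^2, gamma(1)=theta
th, n = 0.5, 5
gamma = np.zeros(n)
gamma[0], gamma[1] = 1 + th ** 2, th

theta, nu = innovations(gamma)

# Build A_n^{-1}: unit lower-triangular with theta_{mj} below the diagonal
A_inv = np.eye(n)
for m in range(1, n):
    for j in range(1, m + 1):
        A_inv[m, m - j] = theta[m, j]
D = np.diag(nu)
Gamma = toeplitz(gamma)

print(np.allclose(A_inv @ D @ A_inv.T, Gamma))      # Gamma_n = A^{-1} D (A^{-1})'
print(np.isclose(np.linalg.det(Gamma), nu.prod()))  # det Gamma_n = nu_0 ... nu_{n-1}
```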



Rewriting the Likelihood

\[ L(\phi, \theta, \sigma) \hspace{3mm} = \hspace{3mm} \frac{1}{(2 \pi)^{n/2} (\mbox{det}\mathbf \Gamma_n)^{1/2} } \exp\Big( - \frac{1}{2} \mathbf X'_n \mathbf \Gamma_n^{-1} \mathbf X_n \Big) \\ \\ \hspace{3mm} = \hspace{3mm} \frac{1}{ \sqrt{(2 \pi)^n \nu_0 \cdots \nu_{n-1} }} \exp\Big\{ - \frac{1}{2} \sum_{j=1}^n (X_j-\hat X_j)^2/\nu_{j-1}\Big\} \\ \\ \hspace{3mm} = \hspace{3mm} \frac{1}{ \sqrt{(2 \pi \sigma^2 )^n \, r_0 \cdots r_{n-1} }} \exp\Big\{ - \frac{1}{2 \sigma^2} \sum_{j=1}^n (X_j-\hat X_j)^2/r_{j-1}\Big\} \] by letting \(\nu_i = \sigma^2 r_i\).

\[ L(\phi, \theta, \sigma) \hspace{3mm} = \hspace{3mm} \frac{1}{ \sqrt{(2 \pi \sigma^2 )^n \, r_0 \cdots r_{n-1} }} \exp\Big\{ - \frac{1}{2 \sigma^2} \sum_{j=1}^n (X_j-\hat X_j)^2/r_{j-1}\Big\} \] Letting \(m=\max(p,q)\), \[ \hat X_{n+1} \hspace{3mm} = \hspace{3mm} \left\{ \begin{array}{ll} \sum_{j=1}^n \theta_{nj} \Big(X_{n+1-j} - \hat X_{n+1-j}\Big)& \mbox{ if } 1 \leq n < m, \\ \phi_1 X_n + \cdots + \phi_p X_{n+1-p} + \sum_{j=1}^q \theta_{nj} \Big(X_{n+1-j} - \hat X_{n+1-j}\Big) & \mbox{ if } n \geq m, \\ \end{array} \right. \] using \(\theta_{nj}\) and \(r_n\) determined by the innovations algorithm.

With \[ S(\phi,\theta) = \sum_{j=1}^n (X_j - \hat X_j)^2 / r_{j-1}, \] the MLE minimizes the reduced likelihood \[ \ell(\phi, \theta) = \ln\big( S(\phi,\theta)/n\big) + \frac{1}{n} \sum_{j=1}^n \ln( r_{j-1} ) \] and then sets \(\hat \sigma^2 = S(\hat\phi,\hat\theta)/n\).
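As a sketch of this procedure, consider an MA(1), where the innovations recursions simplify to \(r_0 = 1+\theta^2\), \(\theta_{n1} = \theta/r_{n-1}\), and \(r_n = 1+\theta^2-\theta^2/r_{n-1}\) (computed at \(\sigma^2 = 1\), so \(\nu_j = r_j\)). A grid search over the reduced likelihood (simulated data with a hypothetical \(\theta = 0.4\)) then recovers the parameter:

```python
import numpy as np

def ma1_reduced_loglik(theta1, x):
    """Reduced likelihood l(theta) = ln(S/n) + (1/n) sum ln r_{j-1} for MA(1),
    using the simplified innovations recursions (sigma^2 = 1 so nu_j = r_j)."""
    n = len(x)
    r = np.zeros(n)
    xhat = np.zeros(n)          # one-step predictors; xhat[0] = 0
    r[0] = 1 + theta1 ** 2
    for m in range(1, n):
        th_m1 = theta1 / r[m - 1]
        xhat[m] = th_m1 * (x[m - 1] - xhat[m - 1])
        r[m] = 1 + theta1 ** 2 - theta1 ** 2 / r[m - 1]
    S = np.sum((x - xhat) ** 2 / r)   # (X_j - Xhat_j)^2 / r_{j-1} in 0-based form
    return np.log(S / n) + np.mean(np.log(r)), S / n

# Simulated MA(1) with theta = 0.4, sigma^2 = 1 (hypothetical example)
rng = np.random.default_rng(2)
e = rng.normal(size=501)
x = e[1:] + 0.4 * e[:-1]

grid = np.linspace(-0.9, 0.9, 181)
ell = [ma1_reduced_loglik(t, x)[0] for t in grid]
theta_hat = grid[int(np.argmin(ell))]
sigma2_hat = ma1_reduced_loglik(theta_hat, x)[1]   # sigma^2-hat = S/n
print(theta_hat, sigma2_hat)
```

In practice a numerical optimizer replaces the grid, but the structure is the same: the innovations quantities \(r_j\) and \(\hat X_j\) are recomputed at each candidate parameter value.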



CSS

  • The Conditional Sum of Squares (CSS) estimator minimizes \(S(\phi,\theta)\) only.

  • It provides a good starting point for the MLE.
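A sketch of the conditional-sum-of-squares idea for an ARMA(1,1), conditioning on zero starting values for the residual recursion (the parameter values and the use of Nelder-Mead are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def css(params, x):
    """Conditional sum of squares for ARMA(1,1),
    conditioning on eps_0 = 0 and the first observation."""
    phi, theta = params
    eps = np.zeros(len(x))
    for t in range(1, len(x)):
        eps[t] = x[t] - phi * x[t - 1] - theta * eps[t - 1]
    return np.sum(eps[1:] ** 2)

# Simulate ARMA(1,1) with phi = 0.6, theta = 0.3 (hypothetical example)
rng = np.random.default_rng(1)
n = 2000
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + e[t] + 0.3 * e[t - 1]

res = minimize(css, x0=[0.0, 0.0], args=(x,), method="Nelder-Mead")
print(res.x)   # should be near (0.6, 0.3)
```

The CSS minimizer is cheap to compute because it avoids the \(r_j\) terms entirely, which is why it serves as a starting point for the full MLE.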



Example: MLE and initial values

## Series: D1 
## ARIMA(1,0,3) with non-zero mean 
## 
## Coefficients:
##          ar1      ma1      ma2      ma3    mean
##       0.8695  -0.0925  -0.2958  -0.1809  1.0607
## s.e.  0.0865   0.1129   0.0921   0.0751  0.1590
## 
## sigma^2 estimated as 0.4809:  log likelihood=-205.24
## AIC=422.47   AICc=422.91   BIC=442.17
## Series: D1 
## ARIMA(1,0,3) with non-zero mean 
## 
## Coefficients:
##          ar1      ma1      ma2      ma3    mean
##       0.8695  -0.0925  -0.2958  -0.1809  1.0607
## s.e.  0.0865   0.1129   0.0921   0.0751  0.1590
## 
## sigma^2 estimated as 0.4809:  log likelihood=-205.24
## AIC=422.47   AICc=422.91   BIC=442.17
## Series: D1 
## ARIMA(1,0,3) with non-zero mean 
## 
## Coefficients:
##          ar1      ma1      ma2      ma3    mean
##       0.8570  -0.0780  -0.2870  -0.1784  1.1006
## s.e.  0.0889   0.1146   0.0916   0.0738  0.1577
## 
## sigma^2 estimated as 0.479:  part log likelihood=-205
## Series: D1 
## ARIMA(1,0,3) with non-zero mean 
## 
## Coefficients:
## Warning in sqrt(diag(x$var.coef)): NaNs produced
##          ar1      ma1      ma2     ma3     mean
##       0.9998  -0.3018  -0.3959  0.0291  19.9009
## s.e.     NaN   0.0738   0.0618  0.0741      NaN
## 
## sigma^2 estimated as 0.5394:  log likelihood=-219.19
## AIC=450.38   AICc=450.82   BIC=470.08
## Series: D1 
## ARIMA(1,0,3) with non-zero mean 
## 
## Coefficients:
##          ar1      ma1      ma2      ma3    mean
##       0.8695  -0.0926  -0.2958  -0.1809  1.0606
## s.e.  0.0865   0.1129   0.0921   0.0751  0.1590
## 
## sigma^2 estimated as 0.4809:  log likelihood=-205.24
## AIC=422.47   AICc=422.91   BIC=442.17



2. Example: Wave tank data


Cowpertwait Ch 6.5. The data in the file wave.dat are the surface height of water (mm), relative to the still water level, measured using a capacitance probe positioned at the centre of a wave tank. The continuous voltage signal from this capacitance probe was sampled every 0.1 second over a 39.6-second period. The objective is to fit a suitable ARMA(p, q) model that can be used to generate a realistic wave input to a mathematical model for an ocean-going tugboat in a computer simulation. The results of the computer simulation will be compared with tests using a physical model of the tugboat in the wave tank.


## Series: Wave 
## ARIMA(3,0,5) with non-zero mean 
## 
## Coefficients:
##          ar1      ar2     ar3      ma1     ma2     ma3     ma4      ma5     mean
##       1.8624  -1.3719  0.2980  -1.6133  0.0287  0.7688  0.1794  -0.3429  -5.0000
## s.e.  0.1110   0.1670  0.0988   0.1047  0.1540  0.1065  0.1353   0.0645   0.7079
## 
## sigma^2 estimated as 19886:  log likelihood=-2520
## AIC=5060.01   AICc=5060.58   BIC=5099.82

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15  BL20  BL25  ML15  ML20 JB      SD
## [1,] 0.078 0.174 0.163 0.959 0.774  0 139.579
## Series: Wave 
## ARIMA(2,0,3) with non-zero mean 
## 
## Coefficients:
##          ar1      ar2      ma1      ma2     ma3     mean
##       1.3656  -0.7935  -1.1230  -0.3632  0.5475  -5.0208
## s.e.  0.0346   0.0333   0.0424   0.0645  0.0398   1.0523
## 
## sigma^2 estimated as 21218:  log likelihood=-2533.95
## AIC=5081.9   AICc=5082.19   BIC=5109.77

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15  BL20  BL25  ML15  ML20 JB      SD
## [1,] 0.007 0.001 0.002 0.953 0.893  0 144.731
## Series: Wave 
## ARIMA(2,0,3) with non-zero mean 
## 
## Coefficients:
##          ar1      ar2      ma1      ma2     ma3     mean
##       1.3656  -0.7935  -1.1230  -0.3632  0.5475  -5.0208
## s.e.  0.0346   0.0333   0.0424   0.0645  0.0398   1.0523
## 
## sigma^2 estimated as 21218:  log likelihood=-2533.95
## AIC=5081.9   AICc=5082.19   BIC=5109.77
##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15  BL20  BL25  ML15  ML20 JB      SD
## [1,] 0.007 0.001 0.002 0.953 0.893  0 144.731
## Series: Wave 
## ARIMA(5,0,6) with non-zero mean 
## 
## Coefficients:
##          ar1     ar2      ar3     ar4      ar5      ma1      ma2     ma3     ma4      ma5      ma6
##       1.0560  0.1224  -0.9790  0.5242  -0.1875  -0.7898  -1.3033  0.9377  0.6082  -0.1506  -0.2538
## s.e.  0.1742  0.2170   0.1008  0.2271   0.1266   0.1729   0.1819  0.2595  0.2765   0.1098   0.1123
##          mean
##       -4.9915
## s.e.   0.7463
## 
## sigma^2 estimated as 19765:  log likelihood=-2517.41
## AIC=5060.82   AICc=5061.77   BIC=5112.57

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##      BL15  BL20  BL25  ML15  ML20 JB      SD
## [1,]  0.1 0.202 0.238 0.931 0.669  0 138.612
## Series: Wave 
## ARIMA(2,0,3) with zero mean 
## 
## Coefficients:
##          ar1      ar2      ma1      ma2     ma3
##       1.3950  -0.8043  -1.1364  -0.3536  0.5836
## s.e.  0.0329   0.0324   0.0406   0.0635  0.0382
## 
## sigma^2 estimated as 21837:  log likelihood=-2540.1
## AIC=5092.2   AICc=5092.41   BIC=5116.09

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15 BL20  BL25  ML15  ML20 JB      SD
## [1,] 0.003    0 0.001 0.885 0.837  0 145.162
## Series: Wave 
## ARIMA(2,0,3) with zero mean 
## 
## Coefficients:
##          ar1      ar2      ma1      ma2     ma3
##       1.3950  -0.8043  -1.1364  -0.3536  0.5836
## s.e.  0.0329   0.0324   0.0406   0.0635  0.0382
## 
## sigma^2 estimated as 21837:  log likelihood=-2540.1
## AIC=5092.2   AICc=5092.41   BIC=5116.09
## Series: Wave 
## ARIMA(3,0,4) with zero mean 
## 
## Coefficients:
##          ar1      ar2      ar3      ma1      ma2     ma3     ma4
##       1.0981  -0.4305  -0.1755  -0.7581  -0.7075  0.2575  0.3363
## s.e.  0.1604   0.2260   0.1379   0.1506   0.1730  0.0869  0.0925
## 
## sigma^2 estimated as 21399:  log likelihood=-2535.05
## AIC=5086.09   AICc=5086.47   BIC=5117.95

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15  BL20  BL25  ML15  ML20 JB      SD
## [1,] 0.054 0.006 0.007 0.742 0.611  0 143.585
## Series: Wave 
## ARIMA(4,0,5) with zero mean 
## 
## Coefficients:
##          ar1      ar2     ar3      ar4      ma1     ma2     ma3     ma4      ma5
##       2.0338  -1.9094  0.8713  -0.2517  -1.7239  0.4474  0.4753  0.0165  -0.1648
## s.e.  0.1895   0.3853  0.3367   0.1222   0.1895  0.3353  0.1582  0.2028   0.1057
## 
## sigma^2 estimated as 20761:  log likelihood=-2528.32
## AIC=5076.64   AICc=5077.21   BIC=5116.45

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15 BL20  BL25  ML15  ML20 JB      SD
## [1,] 0.023 0.05 0.062 0.671 0.493  0 140.182
## Series: Wave 
## ARIMA(4,0,4) with zero mean 
## 
## Coefficients:
##          ar1      ar2     ar3      ar4      ma1     ma2     ma3     ma4
##       1.6534  -1.5120  0.7441  -0.3344  -1.3300  0.1608  0.1626  0.1081
## s.e.  0.2792   0.4254  0.2928   0.0816   0.2838  0.3397  0.1665  0.1819
## 
## sigma^2 estimated as 20949:  log likelihood=-2530.47
## AIC=5078.95   AICc=5079.41   BIC=5114.78

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15  BL20  BL25  ML15  ML20 JB      SD
## [1,] 0.107 0.025 0.021 0.749 0.622  0 141.528
## Series: Wave 
## ARIMA(5,0,4) with zero mean 
## 
## Coefficients:
##          ar1     ar2      ar3     ar4      ar5      ma1      ma2     ma3     ma4
##       0.8602  0.0001  -0.8088  0.5279  -0.3400  -0.5370  -1.1058  0.6008  0.2112
## s.e.  0.1105  0.1285   0.0600  0.1121   0.0732   0.1138   0.1053  0.1027  0.0996
## 
## sigma^2 estimated as 20943:  log likelihood=-2529.98
## AIC=5079.96   AICc=5080.53   BIC=5119.77

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##      BL15  BL20  BL25  ML15  ML20 JB      SD
## [1,] 0.06 0.023 0.021 0.789 0.643  0 141.262
## Series: Wave 
## ARIMA(6,0,4) with zero mean 
## 
## Coefficients:
##          ar1      ar2     ar3      ar4     ar5      ar6      ma1     ma2      ma3     ma4
##       2.9046  -4.5977  4.5505  -3.2545  1.5474  -0.4676  -2.6066  2.8921  -1.6503  0.4296
## s.e.  0.1347   0.3424  0.4465   0.3541  0.1831   0.0570   0.1468  0.3346   0.2912  0.1018
## 
## sigma^2 estimated as 20061:  log likelihood=-2521.19
## AIC=5064.37   AICc=5065.06   BIC=5108.17

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15  BL20  BL25 ML15  ML20 JB      SD
## [1,] 0.456 0.554 0.684 0.67 0.347  0 137.671
## Series: Wave 
## ARIMA(7,0,4) with zero mean 
## 
## Coefficients:
##          ar1     ar2      ar3      ar4     ar5      ar6     ar7      ma1      ma2      ma3     ma4
##       0.5179  0.0387  -0.1449  -0.5541  0.4768  -0.4736  0.1413  -0.1864  -1.0099  -0.4263  0.8501
## s.e.  0.0647  0.0536   0.0517   0.0389  0.0521   0.0535  0.0629   0.0400   0.0325   0.0256  0.0359
## 
## sigma^2 estimated as 20625:  log likelihood=-2527.66
## AIC=5079.31   AICc=5080.12   BIC=5127.09

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15 BL20  BL25  ML15 ML20 JB      SD
## [1,] 0.119 0.02 0.015 0.788 0.69  0 139.866
## Series: Wave 
## ARIMA(8,0,4) with zero mean 
## 
## Coefficients:
##          ar1      ar2      ar3      ar4     ar5      ar6     ar7      ar8      ma1      ma2
##       0.4919  -0.0391  -0.1141  -0.6404  0.4299  -0.4863  0.1590  -0.1449  -0.1452  -0.9680
## s.e.  0.0622   0.0603   0.0510   0.0514  0.0525   0.0515  0.0602   0.0606   0.0422   0.0348
##           ma3     ma4
##       -0.4190  0.8290
## s.e.   0.0308  0.0379
## 
## sigma^2 estimated as 20371:  log likelihood=-2525
## AIC=5076   AICc=5076.95   BIC=5127.76

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15  BL20  BL25  ML15  ML20 JB      SD
## [1,] 0.133 0.134 0.116 0.748 0.533  0 138.692
## Series: Wave 
## ARIMA(8,0,5) with zero mean 
## 
## Coefficients:
##          ar1      ar2      ar3      ar4     ar5      ar6     ar7      ar8     ma1      ma2     ma3
##       1.0051  -0.3517  -0.1294  -0.5895  0.7024  -0.7513  0.4144  -0.2672  -0.671  -0.8337  0.1393
## s.e.  0.1265   0.0904   0.0615   0.0589  0.0842   0.0819  0.0854   0.0592   0.125   0.0387  0.1214
##          ma4      ma5
##       1.0355  -0.4813
## s.e.  0.0589   0.1058
## 
## sigma^2 estimated as 20027:  log likelihood=-2521.2
## AIC=5070.4   AICc=5071.5   BIC=5126.14

##   B-L test H0: the series is uncorrelated
##   M-L test H0: the square of the series is uncorrelated
##   J-B test H0: the series came from Normal distribution
##   SD         : Standard Deviation of the series
##       BL15  BL20  BL25  ML15  ML20 JB      SD
## [1,] 0.319 0.473 0.557 0.778 0.494  0 136.768

When you want to turn off certain parameters

  • It is not a good idea to do so here, but it is possible to keep ARMA(3,5) while fixing \(\theta_2\) and \(\theta_4\) at zero.

## Series: Wave 
## ARIMA(3,0,5) with non-zero mean 
## 
## Coefficients:
##          ar1     ar2     ar3      ma1  ma2     ma3  ma4      ma5     mean
##       1.8743  -1.395  0.3101  -1.6210    0  0.9117    0  -0.2695  -4.9952
## s.e.  0.0629   0.092  0.0571   0.0343    0  0.0650    0   0.0369   0.7309
## 
## sigma^2 estimated as 19933:  log likelihood=-2521.46
## AIC=5058.93   AICc=5059.3   BIC=5090.78



Summary

  • We assume normality of the errors \(e_t\), and compute the MLEs of the parameters, \(\hat \phi_i\), \(\hat \theta_i\), and \(\hat \sigma\).

  • Since it is an MLE, the large-sample sampling distribution is normal, and the large-sample variances of the estimates are known.

  • That means that when \(n\) is small, the standard errors in the R output may not be accurate.

  • Even though normality is assumed in the MLE computation, the estimators remain consistent when \(e_t\) is not normal.

  • The MLE is computed numerically, and numerical optimization needs a good starting point. If the MLE gives a computation error, try a different starting point.

  • By default, the CSS estimate is used as the starting point.

  • When calculating AICc, the CSS value can be used to make the computation faster; an option is available to turn this off.





